Problematic Content


Telegram's Pavel Durov announces new crackdown on illegal content after arrest

The Guardian

Telegram founder and chief executive Pavel Durov said Monday that the messaging platform had removed more "problematic content" and would take a more proactive approach to complying with government requests. The announcement comes weeks after his arrest in France on charges of failing to act against criminals using the app. Telegram's search feature "has been abused by people who violated our terms of service to sell illegal goods", Durov told the 13 million subscribers of his personal messaging channel. "Over the past few weeks", staff had combed through Telegram using artificial intelligence to ensure "all the problematic content we identified in Search is no longer accessible", he said. Durov added that the platform had updated its terms of service and privacy policy to make clear that it would share infringers' details with authorities – including IP addresses and phone numbers – "in response to valid legal requests".


Building AI Safely Is Getting Harder and Harder

The Atlantic - Technology

This is Atlantic Intelligence, an eight-week series in which The Atlantic's leading thinkers on AI will help you understand the complexity and opportunities of this groundbreaking technology. The bedrock of the AI revolution is the internet, or more specifically, the ever-expanding bounty of data that the web makes available to train algorithms. ChatGPT, Midjourney, and other generative-AI models "learn" by detecting patterns in massive amounts of text, images, and videos scraped from the internet. The process entails hoovering up huge quantities of books, art, memes, and, inevitably, the troves of racist, sexist, and illicit material distributed across the web. Earlier this week, Stanford researchers found a particularly alarming example of that toxicity: The largest publicly available image data set used to train AIs, LAION-5B, reportedly contains more than 1,000 images depicting the sexual abuse of children, out of more than 5 billion in total.


Users trust AI as much as humans for flagging problematic content

#artificialintelligence

Social media users may trust artificial intelligence (AI) as much as human editors to flag hate speech and harmful content, according to researchers at Penn State. The researchers said that when users think about positive attributes of machines, like their accuracy and objectivity, they show more faith in AI. However, if users are reminded about the inability of machines to make subjective decisions, their trust is lower. The findings may help developers design better AI-powered content curation systems that can handle the large amounts of information currently being generated while avoiding the perception that the material has been censored, or inaccurately classified, said S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory. "There's this dire need for content moderation on social media and more generally, online media," said Sundar, who is also an affiliate of Penn State's Institute for Computational and Data Sciences.


People who distrust fellow humans show greater trust in artificial intelligence

#artificialintelligence

A person's distrust in humans predicts they will have more trust in artificial intelligence's ability to moderate content online, according to a recently published study. The findings, the researchers say, have practical implications for both designers and users of AI tools in social media. "We found a systematic pattern of individuals who have less trust in other humans showing greater trust in AI's classification," said S. Shyam Sundar, the James P. Jimirro Professor of Media Effects at Penn State. "Based on our analysis, this seems to be due to the users invoking the idea that machines are accurate, objective and free from ideological bias." The study, published in the journal New Media & Society, also found that "power users", experienced users of information technology, had the opposite tendency.


Helping Protect Brands and People From Problematic Content Online

#artificialintelligence

As the internet has evolved to incorporate more networks and devices, people everywhere have benefited from greater connection and access to information. But some of the same problems that have concerned humanity throughout history, including hate speech and misinformation, have also taken form online. Fortunately, artificial intelligence is evolving to address these challenges. Some of the latest advancements in AI are helping preserve the integrity of online platforms, prevent harmful and misleading content from reaching people and ensure brand safety for advertisers. Finding problematic content at scale is an incredibly difficult task, but AI is giving marketers tools to do it quickly and more effectively. To encourage user safety and keep harmful content off our platforms, we've established community standards for both Facebook and Instagram.


How Can Artificial Intelligence Improve Social Media?

#artificialintelligence

Today, we will look at the relationship between artificial intelligence and social media, and how powerful AI stands to disrupt the entire social media landscape. Businesses and organizations have been exploring how to use artificial intelligence and machine learning to moderate content effectively. The numbers speak for themselves: more than three billion social media users, 125 million Twitter users, and more than five billion Google search queries a day. Machine learning and AI algorithms play a significant role in managing content. The problem of harmful content can be tackled with AI-based solutions, and many advances have been made in this field toward content moderation and monitoring.
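To make the idea of AI-based moderation concrete, here is a minimal sketch of the kind of classifier that underlies such systems: a toy text model that scores posts and routes suspect ones to human review. The training examples, the moderate() helper and the 0.7 review threshold are illustrative assumptions for this sketch, not any platform's actual pipeline.

```python
# Minimal sketch of AI-assisted content moderation: a toy text classifier
# that scores posts and flags likely-problematic ones for human review.
# The tiny labeled dataset and the 0.7 threshold are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training examples: 1 = problematic, 0 = benign.
posts = [
    "I hate this group of people, they should disappear",
    "Buy illegal goods here, fast shipping",
    "What a lovely sunset over the lake today",
    "Congrats on the new job, well deserved!",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding logistic regression: a common lightweight
# baseline before platforms move to large neural models.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

def moderate(post: str, threshold: float = 0.7) -> str:
    """Route a post: flag it for human review or allow it."""
    score = model.predict_proba([post])[0][1]  # probability of "problematic"
    if score >= threshold:
        return f"flagged for review (score={score:.2f})"
    return f"allowed (score={score:.2f})"

if __name__ == "__main__":
    print(moderate("they should disappear, I hate them"))
    print(moderate("lovely weather for a walk today"))
```

Production systems differ mainly in scale, with large neural models, billions of posts and continuous retraining, but the basic score-and-review loop is the same.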


YouTube says computers are catching problem videos

#artificialintelligence

In December, Google said it was hiring 10,000 people in 2018 to address policy violations across its platforms. The vast majority of videos removed from YouTube toward the end of last year for violating the site's content guidelines had first been detected by machines instead of humans, the Google-owned company said. YouTube said it took down 8.28 million videos during the fourth quarter of 2017, and about 80 per cent of those videos had initially been flagged by artificially intelligent computer systems. The new data highlighted the significant role machines, not just users, government agencies and other organisations, were taking in policing the service as it faced increased scrutiny over the spread of conspiracy videos, fake news and violent content from extremist organisations. Those videos are sometimes promoted by YouTube's recommendation system and unknowingly financed by advertisers, whose ads are placed next to them through an automated system.
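By those figures, machines accounted for roughly 0.80 × 8.28 million ≈ 6.6 million of the removed videos, leaving about 1.7 million that were first flagged by users, government agencies and other organisations.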